A conjugate prior for discrete hierarchical loglinear models
Authors
Abstract
In Bayesian analysis of multi-way contingency tables, the selection of a prior distribution for either the log-linear parameters or the cell probabilities is a major challenge. In this paper, we define a flexible family of conjugate priors for the wide class of discrete hierarchical log-linear models, which includes the class of graphical models. These priors are defined as the Diaconis–Ylvisaker conjugate priors on the log-linear parameters subject to "baseline constraints" under multinomial sampling. We also derive the induced prior on the cell probabilities and show that it generalizes the hyper Dirichlet prior. We show that this prior has several desirable properties and illustrate its usefulness by identifying the most probable decomposable, graphical and hierarchical log-linear models for a six-way contingency table.
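To fix ideas (the notation and hyperparameters below are illustrative, not necessarily the paper's exact conventions): a hierarchical log-linear model puts the multinomial likelihood in exponential-family form, and the Diaconis–Ylvisaker construction then yields a conjugate prior of the standard exponential-family form.

```latex
% Log-linear parametrization of the cell probabilities (illustrative notation):
% cells i = (i_1, \dots, i_k), hierarchical model with generating class \mathcal{D}.
\[
  \log p(i) \;=\; \sum_{D \in \mathcal{D}} \theta_D(i_D)
  \;-\; \log \sum_{j} \exp\Big( \sum_{D \in \mathcal{D}} \theta_D(j_D) \Big),
\]
% Under multinomial sampling this is an exponential family with canonical
% parameter \theta and log-partition function k(\theta), so the
% Diaconis--Ylvisaker conjugate prior takes the form
\[
  \pi(\theta \mid s, \alpha) \;\propto\; \exp\big( \langle s, \theta \rangle - \alpha\, k(\theta) \big),
\]
% with hyperparameters (s, \alpha); posterior updating simply adds the observed
% sufficient statistics to s and the sample size to \alpha.
```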
Related resources
The conjugate prior for discrete hierarchical log-linear models
In the Bayesian analysis of contingency table data, the selection of a prior distribution for either the log-linear parameters or the cell probabilities parameter is a major challenge. Though the conjugate prior on cell probabilities has been defined by Dawid and Lauritzen (1993) for decomposable graphical models, it has not been identified for the larger class of graphical models Markov with r...
Alternative parametrizations and reference priors for decomposable discrete graphical models
For a given discrete decomposable graphical model, we identify several alternative parametrizations, and construct the corresponding reference priors for suitable groupings of the parameters. Specifically, assuming that the cliques of the graph are arranged in a perfect order, the parameters we consider are conditional probabilities of clique-residuals given separators, as well as generalized l...
Minimum Φ-divergence Estimator and Hierarchical Testing in Loglinear Models
In this paper we consider inference based on very general divergence measures, under assumptions of multinomial sampling and loglinear models. We define the minimum φ-divergence estimator, which is seen to be a generalization of the maximum likelihood estimator. This estimator is then used in a φ-divergence goodness-of-fit statistic, which is the basis of two new statistics for solving the prob...
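For reference (these are the standard definitions, not taken from the paper itself), the φ-divergence between multinomial probability vectors and the estimator it induces can be written as:

```latex
\[
  D_\phi(p, q) \;=\; \sum_{i} q_i \,\phi\!\Big( \frac{p_i}{q_i} \Big),
  \qquad \phi \text{ convex, } \phi(1) = 0.
\]
% The minimum \phi-divergence estimator picks the model parameters that
% minimize the divergence from the empirical proportions \hat p:
\[
  \hat\theta_\phi \;=\; \arg\min_{\theta}\; D_\phi\big( \hat p,\, p(\theta) \big).
\]
% Taking \phi(x) = x \log x - x + 1 gives the Kullback--Leibler divergence,
% and minimizing KL(\hat p \,\|\, p(\theta)) recovers the maximum likelihood
% estimator, which is the sense in which this estimator generalizes the MLE.
```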
Graphical Model Structure Learning with ℓ1-Regularization
This work looks at fitting probabilistic graphical models to data when the structure is not known. The main tool to do this is ℓ1-regularization and the more general group ℓ1-regularization. We describe limited-memory quasi-Newton methods to solve optimization problems with these types of regularizers, and we examine learning directed acyclic graphical models with ℓ1-regularization, learning un...
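As a toy illustration of why the ℓ1 penalty induces sparse structure (a generic sketch, not the authors' quasi-Newton solvers): proximal gradient descent (ISTA) for the lasso applies a soft-thresholding step that drives irrelevant coefficients to zero, which is exactly the mechanism that selects edges in ℓ1-regularized structure learning.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1: shrinks each entry toward zero."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def lasso_ista(X, y, lam, iters=500):
    """Minimize 0.5 * ||y - X w||^2 + lam * ||w||_1 by proximal gradient (ISTA)."""
    n, p = X.shape
    # Step size 1/L, where L = sigma_max(X)^2 is the gradient's Lipschitz constant.
    step = 1.0 / np.linalg.norm(X, 2) ** 2
    w = np.zeros(p)
    for _ in range(iters):
        grad = X.T @ (X @ w - y)              # gradient of the smooth term
        w = soft_threshold(w - step * grad, step * lam)
    return w

# Toy data: only the first two of ten coordinates matter.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
w_true = np.zeros(10)
w_true[:2] = [3.0, -2.0]
y = X @ w_true + 0.1 * rng.standard_normal(200)

w_hat = lasso_ista(X, y, lam=5.0)
# The soft-threshold step keeps the two true signals (slightly shrunk) and
# pushes the eight irrelevant coordinates to (near) zero.
```

The same shrinkage idea carries over to the graphical-model setting, where each zeroed group of parameters corresponds to a deleted edge.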
Phi-divergence Test Statistics in Multinomial Sampling for Hierarchical Sequences of Loglinear Models with Linear Constraints
We consider nested sequences of hierarchical loglinear models when expected frequencies are subject to linear constraints, and we study the problem of finding the model in the nested sequence that best explains the given data. It will be necessary to give a method to estimate the parameters of the loglinear models and also a procedure to choose the best model among the mode...
Publication date: 2009